Learning values across many orders of magnitude
Most learning algorithms are not invariant to the scale of the signal that is being approximated. We propose to adaptively normalize the targets used in the learning updates. This is important in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.
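As a rough illustration of the idea described above (a sketch in the spirit of the paper, not the authors' exact algorithm), one can keep running statistics of the targets, learn against normalized targets, and rescale the output layer whenever the statistics change so that the unnormalized predictions are preserved. The class and parameter names below are illustrative:

```python
import numpy as np

class AdaptiveTargetNormalizer:
    """Sketch of adaptive target normalization: targets are normalized
    with running statistics, and a (scalar, toy) output layer is
    rescaled whenever the statistics change, so that unnormalized
    predictions are preserved across the update."""

    def __init__(self, beta=0.01):
        self.beta = beta   # step size for the running statistics (assumed)
        self.mu = 0.0      # running mean of the targets
        self.nu = 1.0      # running second moment of the targets
        self.w = 1.0       # scalar output-layer weight (toy example)
        self.b = 0.0       # scalar output-layer bias

    @property
    def sigma(self):
        # running standard deviation, floored for numerical safety
        return max(np.sqrt(self.nu - self.mu ** 2), 1e-6)

    def unnormalized_output(self, h):
        # h: output of the lower layers; the final prediction is
        # de-normalized with the current statistics
        return self.sigma * (self.w * h + self.b) + self.mu

    def normalized_target(self, target):
        # the learning update would regress (w*h + b) toward this
        return (target - self.mu) / self.sigma

    def update_stats(self, target):
        old_mu, old_sigma = self.mu, self.sigma
        self.mu = (1 - self.beta) * self.mu + self.beta * target
        self.nu = (1 - self.beta) * self.nu + self.beta * target ** 2
        # Preserve outputs: rescale w and b so that
        # sigma * (w*h + b) + mu is unchanged for every h
        self.w = self.w * old_sigma / self.sigma
        self.b = (old_sigma * self.b + old_mu - self.mu) / self.sigma
```

Because the output layer is rescaled exactly when the statistics move, a sudden change in the magnitude of the targets (e.g. unclipped rewards spanning several orders of magnitude) changes the scale of the learning targets without perturbing what the network currently predicts.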
Reviews: Learning values across many orders of magnitude
I think the problem that the authors are trying to solve is a very important one that shows up in many gradient-based learning situations. The ideas are straightforward but (to my knowledge) new and apparently effective. For this reason I can see them becoming widely used. The paper is well-written and clear in most places. The related work section seems to contain enough relevant references, but it would be nice if some of the most closely related works were discussed in a bit more detail.
Hasselt, Hado P. van; Guez, Arthur; Hessel, Matteo; Mnih, Volodymyr; Silver, David